
    Supplementing Frequency Domain Interpolation Methods for Character Animation

    The animation of human characters entails difficulties exceeding those met when simulating objects, machines or plants. A person's gait is a product of nature, affected by mood and physical condition, and small deviations from natural movement are perceived with ease by an unforgiving audience. Motion capture technology is frequently employed to record human movement, and subsequent playback on the skeleton underlying the character being animated conveys many of the subtleties of the original motion. Played-back recordings are of limited value, however, when integration in a virtual environment requires movements beyond those in the motion library, creating a need for the synthesis of new motion from pre-recorded sequences. An existing approach involves interpolation between motions in the frequency domain, with a blending space defined by a triangle network whose vertices represent input motions. It is this branch of character animation that is supplemented by the methods presented in this thesis, with work undertaken in three distinct areas. The first is a streamlined approach to previous work. It provides benefits including an efficiency gain in certain contexts and a very different perspective on triangle network construction, in which networks become adjustable, intuitive user-interface devices whose increased flexibility allows a greater range of motions to be blended than was possible with previous networks. Interpolation-based synthesis can never exhibit the same motion variety as animation methods based on the playback of rearranged frame sequences. Limitations such as this were addressed in the second phase of work with the creation of hybrid networks: novel structures that use properties of frequency-domain triangle blending networks to seamlessly integrate playback-based animation within them. The third area of focus was the distortion found in both frequency- and time-domain blending. A new technique, single-source harmonic switching, was devised which greatly reduces this distortion and adds to the benefits of blending in the frequency domain.
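    The interpolation idea described above can be illustrated with a minimal sketch: a 2D point inside one triangle of the blending network yields barycentric weights, which are then used to linearly combine the per-degree-of-freedom Fourier coefficients of the three input motions at the triangle's vertices. This is only an assumed, simplified reading of the approach; the array layout, the 2D parameterisation of the network and names such as blend_fourier are illustrative rather than taken from the thesis.

    import numpy as np

    def barycentric_weights(p, a, b, c):
        """Weights of 2D point p with respect to triangle (a, b, c)."""
        v0, v1, v2 = b - a, c - a, p - a
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        w1 = (d11 * d20 - d01 * d21) / denom
        w2 = (d00 * d21 - d01 * d20) / denom
        return np.array([1.0 - w1 - w2, w1, w2])

    def blend_fourier(spec_a, spec_b, spec_c, w):
        """Linearly blend complex Fourier coefficients (rows: DC + harmonics, cols: DOFs)."""
        return w[0] * spec_a + w[1] * spec_b + w[2] * spec_c

    # Dummy data: three input motions, each a (DC + 5 harmonics) x 50-DOF spectrum.
    rng = np.random.default_rng(0)
    spectra = [rng.standard_normal((6, 50)) + 1j * rng.standard_normal((6, 50))
               for _ in range(3)]
    corners = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]

    # A point inside the triangle produces a blended spectrum lying "between" the
    # three captured motions; moving the point varies the blend continuously.
    w = barycentric_weights(np.array([0.3, 0.3]), *corners)
    blended = blend_fourier(*spectra, w)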

    Using the Discrete Fourier Transform for Character Motion Blending and Manipulation - a Streamlined Approach

    Motion capture data allows natural-looking motion to be bestowed upon simulated characters, and research has sought ways of extending the range of motions it can reproduce. One such method involves blending between captured sequences in the frequency domain. This paper streamlines the approach taken by similar previous work. Higher efficiency is obtained both by shifting computations from runtime to pre-processing and by using a simpler technique, which is also more flexible, allowing the method to be applied to a greater range of motions. Furthermore, the already-known use of a triangular network defining a continuous blending space is instead presented as an adjustable interface element which is both intuitive and more flexible than in earlier work. As before, input data may be sparse yet still allow the creation of a continuous spectrum of subtly varying motions, enabling characters to integrate well in their environment. Weighting calculation, blending and Fourier synthesis of realistic-looking motion using five harmonics require 0.39 µs per degree of freedom for each frame in the created sequence - a one-off cost incurred only when blending ratios change. This figure can be improved further using the proposed level-of-detail adjustments, which, combined with the method's small memory footprint, make it particularly suitable for the simulation of crowds.
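    As a rough illustration of why the synthesis cost is a one-off incurred only when blending ratios change, the sketch below evaluates joint-angle curves from a DC term plus five blended harmonics per degree of freedom; the per-frame, per-DOF work grows only with the number of harmonics retained, which is also what makes a level-of-detail scheme straightforward. The data layout and the function name synthesise are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def synthesise(spectrum, n_frames):
        """Evaluate a blended spectrum (rows: DC + harmonics, cols: DOFs) over one cycle."""
        n_harmonics = spectrum.shape[0] - 1
        t = np.arange(n_frames) / n_frames          # normalised cycle time per frame
        k = np.arange(1, n_harmonics + 1)           # harmonic indices 1..5
        mags = np.abs(spectrum[1:])                 # (harmonics, DOFs) amplitudes
        args = np.angle(spectrum[1:])               # (harmonics, DOFs) phases
        phase = 2.0 * np.pi * np.outer(t, k)        # (frames, harmonics)
        # angle[f, d] = DC_d + sum_k mag[k, d] * cos(2*pi*k*t_f + arg[k, d])
        curves = np.cos(phase[:, :, None] + args[None, :, :])
        return spectrum[0].real[None, :] + np.einsum('fhd,hd->fd', curves, mags)

    # Dummy blended spectrum for 50 degrees of freedom; in practice this would be
    # the output of the blending step, recomputed only when the weights change.
    rng = np.random.default_rng(0)
    blended = rng.standard_normal((6, 50)) + 1j * rng.standard_normal((6, 50))
    frames = synthesise(blended, n_frames=60)       # (60, 50) joint-angle samples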